An Adaptive Approach for Downloading Replicated Web Resources

Author

  • Vittorio Ghini
Abstract

The success of a Web service is largely dependent on its responsiveness (i.e., its availability and timeliness) in the delivery of the information its users (clients) require. A practical approach to the provision of responsive Web services is based on introducing redundancy in the service by replicating the service itself across a number of servers geographically distributed over the Internet. Provided that the replica servers are maintained mutually consistent, service responsiveness can be guaranteed by dynamically binding the client to the most convenient replica (e.g., the nearest, lightly loaded, available replica; the available replica with the least congested connection to the client). Based on this approach, we have developed a software mechanism [GHI01] that effectively meets the responsiveness requirement mentioned above. In essence, this mechanism, rather than binding a client to its most convenient replica server, engages all the available replicas in supplying a fragment of the Web document that the client requires. The size of the fragment a replica is requested to supply is dynamically evaluated on the basis of the response time that replica can provide its client with. In addition, the proposed mechanism can dynamically adapt to changes in the status of both the network and the replica servers, thus tolerating replica or communication failures that may occur at runtime. Our mechanism can be implemented either as part of the browser software or as part of a proxy server. In this paper, we describe the design, the development, and the performance evaluation of both these implementations of our mechanism. The performance results we have obtained from our evaluation exercise illustrate the adequacy of the mechanism we propose for providing responsive Web services.
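The core idea of the abstract, sizing each replica's fragment according to the response time (throughput) that replica can deliver, can be illustrated with a minimal sketch. The function name and interface below are illustrative assumptions, not the paper's actual implementation: it splits a document into byte ranges proportional to each replica's measured throughput, the kind of ranges a client could then fetch in parallel via HTTP Range requests.

```python
def assign_fragments(total_size, throughputs):
    """Split a document of total_size bytes into one byte range per
    replica, sized in proportion to each replica's measured throughput
    (bytes/s), so faster replicas are asked for larger fragments.

    Illustrative sketch only; not the paper's actual API.
    """
    total_tp = sum(throughputs)
    sizes = [total_size * tp // total_tp for tp in throughputs]
    # Hand the remainder from integer division to the fastest replica.
    fastest = max(range(len(throughputs)), key=lambda i: throughputs[i])
    sizes[fastest] += total_size - sum(sizes)
    # Build inclusive (start, end) byte ranges, HTTP Range style.
    ranges, start = [], 0
    for size in sizes:
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For example, with a 1000-byte document and replicas measured at 50, 30, and 20 bytes/s, the replicas would be asked for bytes 0-499, 500-799, and 800-999 respectively. Re-measuring throughput between requests and recomputing the split is what lets such a scheme adapt to changing network and server conditions.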


Similar resources

Prioritize the ordering of URL queue in Focused crawler

The enormous growth of the World Wide Web in recent years has made it necessary to perform resource discovery efficiently. For a crawler it is not a simple task to download only domain-specific web pages; an unfocused approach often yields undesired results. Therefore, several new ideas have been proposed, among them a key technique is focused crawling which is able to crawl particular topical...


Client-Centered Load Distribution: A Mechanism for Constructing Responsive Web Services

  • A downloading mechanism devoted to replicated Web services
  • Implemented at the browser site
  • Dynamically requests fragments of documents from different replicas
  • Provides the user with timely responses and high availability
  • Experiments validate its effectiveness


Global Distribution of Free Software (and other things)

The Globe Distribution Network (GDN) is a distributed system designed to support the secure distribution of free software. Software packages are encapsulated into distributed objects that implement their own strategy for replicating state. This approach allows each package to be replicated in a way that best handles client demands or optimizes usage of network resources. There is no single glob...


Enhancing Duplicate Collection Detection Through Replica Boundary Discovery

Web documents are widely replicated on the Internet. These replicated documents bring potential problems to Web-based information systems, so replica detection on the Web is an indispensable task. The challenge is to find these duplicated collections in a very large data set with limited hardware resources in acceptable time. In this paper, we first introduce the notion of replica boundary to...


A Comparative Study of Performance of Adaptive Web Sampling and General Inverse Adaptive Sampling in Estimating Olive Production in Iran

Nowadays, there is an increasing use of sampling methods in network and spatial populations. Although the most common link-tracing designs such as adaptive cluster sampling and snowball sampling have advantages over conventional sampling designs such as simple random sampling and cluster sampling, these designs still present many drawbacks. Adaptive web sampling is a new link-tracing design tha...



Journal title:

Volume   Issue

Pages  -

Publication date: 2001